OT Collector trace exporter #405
Conversation
WIP: currently working on unit tests, docs, and end-to-end validation.
Codecov Report

@@            Coverage Diff             @@
##           master     #405      +/-   ##
==========================================
+ Coverage   88.11%   89.23%   +1.12%
==========================================
  Files          41       41
  Lines        2078     2090      +12
  Branches      238      237       -1
==========================================
+ Hits         1831     1865      +34
+ Misses        173      156      -17
+ Partials      74       69       -5
==========================================

Continue to review full report at Codecov.
Removing unnecessary files
Thanks for working on this!
I have some comments; most of them are nits. I think reviewing all the details would take too long.
How did you try it? I followed the instructions in the README and ran the collector with docker-compose:
mvb@asus ~/.../docker $ docker-compose up
Starting docker_collector_1 ... done
Attaching to docker_collector_1
collector_1 | {"level":"info","ts":1582205460.2118967,"caller":"service/service.go:341","msg":"Starting OpenTelemetry Contrib Collector...","Version":"latest","GitHash":"a9f4526","NumCPU":4}
collector_1 | {"level":"info","ts":1582205460.2120395,"caller":"service/service.go:157","msg":"Setting up own telemetry..."}
collector_1 | {"level":"info","ts":1582205460.2133625,"caller":"service/telemetry.go:111","msg":"Serving Prometheus metrics","port":8888}
collector_1 | {"level":"info","ts":1582205460.213417,"caller":"service/service.go:190","msg":"Loading configuration..."}
collector_1 | {"level":"info","ts":1582205460.2144334,"caller":"service/service.go:198","msg":"Applying configuration..."}
collector_1 | {"level":"info","ts":1582205460.215752,"caller":"builder/exporters_builder.go:239","msg":"Exporter is enabled.","exporter":"logging"}
collector_1 | {"level":"info","ts":1582205460.2158055,"caller":"service/service.go:249","msg":"Starting exporters..."}
collector_1 | {"level":"info","ts":1582205460.2158175,"caller":"builder/exporters_builder.go:80","msg":"Exporter is starting...","exporter":"logging"}
collector_1 | {"level":"info","ts":1582205460.2159908,"caller":"builder/exporters_builder.go:85","msg":"Exporter started.","exporter":"logging"}
collector_1 | {"level":"info","ts":1582205460.2163672,"caller":"builder/pipelines_builder.go:177","msg":"Pipeline is enabled.","pipelines":"traces"}
collector_1 | {"level":"info","ts":1582205460.2163959,"caller":"service/service.go:262","msg":"Starting processors..."}
collector_1 | {"level":"info","ts":1582205460.216422,"caller":"builder/pipelines_builder.go:47","msg":"Pipeline is starting...","pipeline":"traces"}
collector_1 | {"level":"info","ts":1582205460.2164388,"caller":"builder/pipelines_builder.go:57","msg":"Pipeline is started.","pipeline":"traces"}
collector_1 | {"level":"info","ts":1582205460.216514,"caller":"builder/receivers_builder.go:210","msg":"Receiver is enabled.","receiver":"opencensus","datatype":"traces"}
collector_1 | {"level":"info","ts":1582205460.2165282,"caller":"service/service.go:274","msg":"Starting receivers..."}
collector_1 | {"level":"info","ts":1582205460.216537,"caller":"builder/receivers_builder.go:63","msg":"Receiver is starting...","receiver":"opencensus"}
collector_1 | {"level":"info","ts":1582205460.2166831,"caller":"builder/receivers_builder.go:68","msg":"Receiver started.","receiver":"opencensus"}
collector_1 | {"level":"info","ts":1582205460.216698,"caller":"service/service.go:167","msg":"Everything is ready. Begin running and processing data."}
But when I run the example, I don't see anything. According to the configuration, the collector should log the spans in its terminal. Am I missing something?
Another point: is the name of the package the right one? Would opentelemetry-ext-collector be better?
collector_span = trace_pb2.Span(
    name=trace_pb2.TruncatableString(value=span.name),
    kind=utils.get_collector_span_kind(span.kind),
    trace_id=span.context.trace_id.to_bytes(16, "big"),
Out of curiosity: Why "big"?
There is no particular reason. Is there any issue with it?
I want to know whether the collector expects trace IDs in a particular byte order; a collection system might not be able to assemble a full trace if we use the wrong trace ID here.
https://github.com/census-instrumentation/opencensus-proto/blob/master/src/opencensus/proto/trace/v1/trace.proto#L41 There are no details about that. @owais, is this something you know?
It shouldn't matter as long as the trace ID is consistent across all spans of a trace, but I still suggest testing it end to end: generate a trace in Python, export it using this lib, and check how the collector interprets it. You can use the file exporter to write the received spans to a file:
exporters:
  file:
    path: ./filename.json
service:
  pipelines:
    traces:
      receivers: [opencensusreceiver]
      exporters: [file]
This absolutely matters. Traces can span multiple systems, multiple OpenTelemetry implementations/languages, even systems not using OpenTelemetry at all but e.g. OpenTracing+jaeger to report to the same back end.
The "problem" here is that we use integers in Python to represent the trace ID, which is semantically a byte array. If we get an incoming trace ID like "4bf92f3577b34da6a3ce929d0e0e4736", then I think it is clear that trace_id[0] == 0x4b and trace_id[15] == 0x36 must be the case. Thus, "big" sounds correct.
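The property under discussion is easy to check in plain Python: serializing the integer form of that example trace ID with byteorder "big" puts 0x4b first and 0x36 last, matching the reading order of the W3C hex representation.

```python
# Convert the example trace ID from its hex form to Python's int
# representation, then back to a 16-byte array using big-endian order.
trace_id = int("4bf92f3577b34da6a3ce929d0e0e4736", 16)
raw = trace_id.to_bytes(16, "big")

# Big-endian preserves the reading order of the hex string:
assert raw[0] == 0x4B
assert raw[15] == 0x36

# Little-endian would reverse the bytes, producing a different ID on the wire:
assert trace_id.to_bytes(16, "little") == raw[::-1]
```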
I configured an http server example to use the collector exporter and a client to use Jaeger, when "big" is used the trace is correctly assembled, so "big" should be the right choice.
However, I'm not sure this is true in all cases, so we probably want to keep an eye on these int <-> bytes <-> string conversions.
When we read the trace ID from the wire we assume big-endianness, so "big" is right here unless we want to reverse it on the way out.
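One way to convince yourself the round trip is lossless: parsing the hex wire form with int(..., 16) and serializing back with to_bytes(16, "big") is equivalent to bytes.fromhex, so "big" on the way out mirrors the big-endian assumption on the way in. A small stdlib-only sketch:

```python
wire_id = "4bf92f3577b34da6a3ce929d0e0e4736"

# Incoming: parse the hex header field into Python's integer representation.
# int(..., 16) implicitly treats the leftmost hex digit as most significant.
as_int = int(wire_id, 16)

# Outgoing: serialize back to 16 bytes for the protobuf bytes field.
as_bytes = as_int.to_bytes(16, "big")

# The round trip matches the raw byte view of the hex string exactly.
assert as_bytes == bytes.fromhex(wire_id)
assert as_bytes.hex() == wire_id
```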
Nice, I tested this and got it working with your instructions. One note: it might be helpful to add Jaeger to the list of exporters in the collector config, so users can see something at the end of the example:
jaeger_grpc:
  endpoint: jaeger-all-in-one:14250
The docker-compose could be updated to do something like this:
version: "2"
services:
  # Collector
  collector:
    image: omnition/opentelemetry-collector-contrib:latest
    command: ["--config=/conf/collector-config.yaml", "--log-level=DEBUG"]
    volumes:
      - ./collector-config.yaml:/conf/collector-config.yaml
    ports:
      - "55678:55678"
    networks:
      - basic
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
    networks:
      - basic
After some chat with @codeboten I was able to use it. It turns out that I was using a
Thanks for reviewing, @codeboten and @mauriciovasquezbernal. Most comments should be resolved now; please let me know if you have more feedback. @mauriciovasquezbernal, regarding the package name: let me know if you have strong opinions about it. The word "collector" is pretty generic and could apply to a lot of things, which is why I added "otcollector" to avoid confusion.
# optional:
# endpoint="myCollectorUrl:55678",
# service_name="test_service",
# host_name="http://localhost",
Should this have a protocol prefixing the host name?
Updated; it actually expects the machine/container name.
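For reference, a typical way to obtain the machine/container name in Python is socket.gethostname(); this is only an illustrative sketch of what callers might pass here, not necessarily what the exporter does internally.

```python
import socket

# Inside a container this typically returns the container's hostname;
# on a bare machine, the host's name.
host_name = socket.gethostname()
print(host_name)
```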
Nice, thanks for addressing my comments @hectorhdzg; I just added one more question. It would be good to get clarity on @mauriciovasquezbernal's question about the byte order; I wasn't able to dig up anything regarding this in the collector repo.
    8, "big"
)

if span.context.trace_state is not None:
I think this is not possible.
I believe I triggered this one with a manually created Span in a unit test. Same as the other checks in this method, I added "if x" instead of "if x is not None".
for (key, value) in span.context.trace_state.items():
    collector_span.tracestate.entries.add(key=key, value=value)

if span.attributes is not None:
This cannot happen:
self.attributes = Span.empty_attributes
Maybe you want to use "if span.attributes" instead, but I guess the check can simply be omitted.
I have multiple "if" checks without the "is not None" for all collections now; I guess it is safer to check even if we expect the value to be there.
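The difference between the two guards is easy to demonstrate: "if x" also skips empty collections, while "if x is not None" only skips the missing case. A small sketch (the entries helper is hypothetical, not the exporter's actual code):

```python
def entries(trace_state):
    # Guarding with truthiness skips both None and an empty mapping,
    # so we never iterate over a value that has nothing to export.
    if trace_state:
        return list(trace_state.items())
    return []

assert entries(None) == []            # missing value: skipped by both styles
assert entries({}) == []              # empty mapping: only "if x" skips this
assert entries({"a": "1"}) == [("a", "1")]
```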
packages=find_namespace:
install_requires =
    grpcio >= 1.0.0, < 2.0.0
    opencensus-proto >= 0.1.0, < 1.0.0
This is an OpenCensus exporter? You write OT exporter in the title.
Well, it is in fact exporting to the OT Collector through the OpenCensus receiver; the OT receiver will be ready in several weeks. I added more details in the first comment in the PR. We will need to revisit this one and add code to handle the OT receiver using the OT proto.
FYI, I spent some time trying to make it work with the OT proto, building the protobuf files myself, before realizing the receiver is not there yet. People are interested in having this ready, so I decided to take the same approach as the JS SDK and support it through the OpenCensus receiver. Once the OT receiver is ready, hopefully the changes will only affect the span transformation and some other small pieces of code.
Just some nits, but it looks good to me.
About the docker folder: do you think we can move it to a more generic place? Also, would it make sense to update other examples to use this exporter as well?
Thanks for reviewing, @mauriciovasquezbernal. Regarding examples, I believe we need some organization in there, with a better folder structure and using multiple exporters like you mentioned. I don't want to focus too much on that in this PR, for smoother reviews, but it is definitely a good idea.
Comments from running the examples, still need to check the exporter logic.
One comment about the exporter version number, but the exporter logic and tests LGTM.
ext/opentelemetry-ext-otcollector/src/opentelemetry/ext/otcollector/util.py
Outdated
Show resolved
Hide resolved
Nice to see this in the collector logs! {"level":"info","ts":1582845653.7891443,"caller":"loggingexporter/logging_exporter.go:36","msg":"TraceExporter","type":"logging","name":"logging","#spans":3}
Rename TracerSource to TracerProvider (open-telemetry#441)
Following discussion in open-telemetry#434, align the name with the specification.
Co-authored-by: Chris Kleinknecht <[email protected]>

Fix new ext READMEs (open-telemetry#444)
Some of the new ext packages had ReStructuredText errors. PyPI rejected the uploads for these packages with: HTTPError: 400 Client Error: The description failed to render for 'text/x-rst'. See https://pypi.org/help/#description-content-type for more information. for url: https://upload.pypi.org/legacy/

Adding attach/detach methods as per spec (open-telemetry#429)
This change updates the Context API with the following: removes the remove_value method, removes the set_current method, and adds attach and detach methods. Fixes open-telemetry#420
Co-authored-by: Chris Kleinknecht <[email protected]>

Make Counter and MinMaxSumCount aggregators thread safe (open-telemetry#439)

OT Collector trace exporter (open-telemetry#405)
Based on the OpenCensus agent exporter. Fixes open-telemetry#343
Co-authored-by: Chris Kleinknecht <[email protected]>

API: Renaming TraceOptions to TraceFlags (open-telemetry#450)
Renaming TraceOptions to TraceFlags, which is the term used to describe the flags associated with the trace in the OpenTelemetry specification. Closes open-telemetry#434

api: Implement "named" meters + Remove "Batcher" from Meter constructor (open-telemetry#431)
Implements open-telemetry#221. Also fixes open-telemetry#394. Stateful.py and stateless.py in the metrics example folder are not changed to use the new loader in anticipation of open-telemetry#422 being merged first and removing them. Lastly, moves InstrumentationInfo from trace.py in the sdk to utils.

Prepare to host on readthedocs.org (open-telemetry#452)

sdk: Implement observer instrument (open-telemetry#425)
Observer instruments are used to capture a current set of values at a point in time [1]. This commit extends the Meter interface to allow registering an observer instrument by passing a callback that will be executed at collection time. The logic inside collection is updated to consider these instruments, and a new ObserverAggregator is implemented.
[1] https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/api-metrics.md#observer-instruments

sdk: fix ConsoleSpanExporter (open-telemetry#455)
19d573a ("Add io and formatter options to console exporter (open-telemetry#412)") changed the way spans are printed by using write() instead of print(). In Python 3.x sys.stdout is line-buffered, so the spans were not being printed to the console at the right time. This commit fixes that by adding an explicit flush() call at the end of the export function; it also changes the default formatter to include a line break. To be precise, only one of the changes was needed to solve the problem, but as a matter of completeness both are included, i.e., to handle the case where the formatter chosen by the user doesn't append a line break.

jaeger: Usage README Update for opentelemetry-ext-jaeger (open-telemetry#459)
Usage docs for opentelemetry-ext-jaeger need to be updated after the change to `TracerSource` with v0.4. Looks like it was partially updated already. Users following the usage docs will currently run into this error: `AttributeError: 'Tracer' object has no attribute 'add_span_processor'`

api: Implementing Propagators API to use Context (open-telemetry#446)
Implementing Propagators API to use Context. Moving tracecontexthttptextformat to trace/propagation, as TraceContext is specific to trace rather than broader context propagation. Using attach/detach for the wsgi and flask extensions, enabling activation of the full context rather than activation of a sub-component such as traces. Adding a composite propagator.
Co-authored-by: Mauricio Vásquez <[email protected]>
Based on OpenCensus Agent exporter
The current implementation uses the OpenCensus receiver available in the Collector. This will eventually need to be updated to use the OT receiver when it is ready, and once compiled Python proto files are available after the proto definition is final.
Fixes #343